Explainable Artificial Intelligence for Sarcasm Detection in Dialogues

Authors

Abstract

Sarcasm detection in dialogues has been gaining popularity among natural language processing (NLP) researchers with the increased use of conversational threads on social media. Capturing the knowledge of the domain of discourse, context propagation during the course of a dialogue, and the situational context and tone of the speaker are some important features for training machine learning models to detect sarcasm in real time. As situational comedies vibrantly represent human mannerisms and behaviour in everyday real-life situations, this research demonstrates an ensemble supervised learning algorithm to detect sarcasm in the benchmark dialogue dataset MUStARD. The punch-line utterance and its associated context are taken as features for the eXtreme Gradient Boosting (XGBoost) method. The primary goal is to predict sarcasm in each utterance of the speaker using the chronological nature of a scene. Further, it is vital to prevent model bias and to help decision makers understand how to use the model in the right way. Therefore, as a twin goal of this research, we make the learning model used for sarcasm detection interpretable. This is done using two post hoc interpretability approaches, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive exPlanations (SHAP), which generate explanations for the output of the trained classifier. The classification results clearly depict the importance of capturing intersentence context for detecting sarcasm in conversational threads. The interpretability methods show the words (features) that most influence the model's decision when detecting sarcasm in dialogues.
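The Shapley-value principle behind SHAP, as applied here, attributes a classifier's output for an utterance across its input features by averaging each feature's marginal contribution over all feature coalitions. A minimal sketch of that computation is shown below, using a hand-written toy scoring function with hypothetical sarcasm cues (`exaggeration`, `positive_wording`, `negative_context`) in place of the paper's trained XGBoost classifier and its actual feature set:

```python
from itertools import combinations
from math import factorial

# Toy "classifier" score over a set of binary cues. These cue names and
# weights are illustrative assumptions, not the paper's features or model.
def score(features):
    s = 0.0
    if "exaggeration" in features:
        s += 0.6
    if "positive_wording" in features and "negative_context" in features:
        s += 0.8  # interaction: positive wording in a negative situation
    if "negative_context" in features:
        s += 0.1
    return s

def shapley_values(all_features, score_fn):
    """Exact Shapley values by enumerating every feature coalition."""
    n = len(all_features)
    phi = {f: 0.0 for f in all_features}
    for f in all_features:
        others = [g for g in all_features if g != f]
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley kernel weight for a coalition of size k
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = score_fn(set(subset) | {f}) - score_fn(set(subset))
                phi[f] += weight * marginal
    return phi

feats = ["exaggeration", "positive_wording", "negative_context"]
phi = shapley_values(feats, score)

# Efficiency property: attributions sum to score(all) - score(none).
assert abs(sum(phi.values()) - (score(set(feats)) - score(set()))) < 1e-9
```

Note how the 0.8 interaction term is split evenly between the two cues that jointly produce it (0.4 each); in practice, the SHAP library's `TreeExplainer` computes the same quantity efficiently for tree ensembles such as XGBoost rather than enumerating coalitions.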


Similar Resources

Explainable Artificial Intelligence for Training and Tutoring

This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.


Building Explainable Artificial Intelligence Systems

As artificial intelligence (AI) systems and behavior models in military simulations become increasingly complex, it has been difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but designers have not heeded the lessons learned from work in explaining expert system behavior. These new explanation systems a...


Automated Reasoning for Explainable Artificial Intelligence

Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This paper discusses the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligenc...


Explainable Artificial Intelligence via Bayesian Teaching

Modern machine learning methods are increasingly powerful and opaque. This opaqueness is a concern across a variety of domains in which algorithms are making important decisions that should be scrutable. The explainability of machine learning systems is therefore of increasing interest. We propose an explanation-by-examples approach that builds on our recent research in Bayesian teaching in which...


An Explainable Artificial Intelligence System for Small-unit Tactical Behavior

As the artificial intelligence (AI) systems in military simulations and computer games become more complex, their actions become increasingly difficult for users to understand. Expert systems for medical diagnosis have addressed this challenge though the addition of explanation generation systems that explain a system’s internal processes. This paper describes the AI architecture and associated...



Journal

Journal title: Wireless Communications and Mobile Computing

Year: 2021

ISSN: 1530-8669, 1530-8677

DOI: https://doi.org/10.1155/2021/2939334